    Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing

    Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples. By transforming a normal sample with some carefully crafted, human-imperceptible perturbations, even highly accurate DNNs make wrong decisions. Multiple defense mechanisms have been proposed which aim to hinder the generation of such adversarial samples. However, a recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples if we impose random mutations on the DNN. We thus first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity. We then integrate statistical hypothesis testing and model mutation testing to check whether an input sample is likely to be normal or adversarial at runtime by measuring its sensitivity. We evaluated our approach on the MNIST and CIFAR10 datasets. The results show that our approach detects adversarial samples generated by state-of-the-art attack methods efficiently and accurately. (Comment: Accepted by ICSE 2019.)
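    The detection recipe described above can be sketched in a few lines: randomly mutate the trained model's weights and measure how often an input's predicted label flips (its sensitivity). The sketch below is a minimal illustration of that idea, not the authors' implementation; the Gaussian-noise mutation operator, the helper names, and the fixed decision threshold (the paper instead uses a sequential statistical hypothesis test) are assumptions made here for brevity.

```python
import numpy as np

def label_change_rate(model_fn, weights, x, n_mutants=50, sigma=0.01, seed=None):
    """Estimate an input's 'sensitivity': the fraction of randomly
    weight-mutated models whose prediction differs from the original model's.
    `model_fn(weights, x)` is assumed to return a class label."""
    rng = np.random.default_rng(seed)
    original = model_fn(weights, x)
    changes = 0
    for _ in range(n_mutants):
        # Gaussian fuzzing of the weights is one simple mutation operator;
        # the paper considers a family of model mutation operators.
        mutated = [w + sigma * rng.standard_normal(w.shape) for w in weights]
        if model_fn(mutated, x) != original:
            changes += 1
    return changes / n_mutants

def looks_adversarial(sensitivity, threshold=0.2):
    # Adversarial inputs tend to show markedly higher label-change rates
    # than normal inputs, so a threshold (or a sequential test) separates them.
    return sensitivity > threshold
```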

    Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time

    The notion of program sensitivity (aka Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program logic framework for proving expected sensitivity of probabilistic while loops where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and dependent on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, it is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples, including Gambler's Ruin, stochastic hybrid systems, and stochastic gradient descent, and present experimental results showing that our automated approach can handle various probabilistic programs from the literature.
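    As a concrete illustration of the property being proved, the toy sketch below empirically estimates expected sensitivity for a Gambler's-Ruin-style loop whose termination time is randomized and depends on the initial capital. This is only a Monte Carlo check of the property; the paper's martingale-based proof technique and synthesis algorithms are not reproduced here, and all parameters are illustrative.

```python
import random

def gamblers_ruin_rounds(capital, p=0.49, goal=20):
    """A probabilistic while loop with a randomized, input-dependent
    termination time: bet one unit per round until broke or at the goal."""
    rounds = 0
    while 0 < capital < goal:
        capital += 1 if random.random() < p else -1
        rounds += 1
    return rounds

def expected_rounds(capital, trials=20_000):
    """Monte Carlo estimate of the expected termination time from `capital`."""
    return sum(gamblers_ruin_rounds(capital) for _ in range(trials)) / trials

# One reading of expected sensitivity: the expected outputs of two nearby
# inputs should differ by at most a constant times the input distance.
print(abs(expected_rounds(10) - expected_rounds(11)))
```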

    Application of stochastic differential equations to option pricing

    The financial world is a world of random, unpredictable events. With the increasingly diverse and complex modern financial market, more and more financial derivatives have emerged in the financial industry, both to pursue higher yields and to hedge risk. As a result, pricing these derivatives, and hence future uncertainty, has become an interesting topic in mathematical finance and quantitative financial analysis. In this thesis, I focus on the application of stochastic differential equations to option pricing. Under the arbitrage-free and risk-neutral assumptions, I use the theory of stochastic differential equations to solve the pricing problem for European options whose underlying assets can be described by a geometric Brownian motion. The thesis explores the Black-Scholes model and formulates an optimal control problem for the volatility, an essential parameter in the Black-Scholes formula. Furthermore, the application of backward stochastic differential equations (BSDEs) is discussed; BSDEs can model the pricing problem in a clearer and more logical way. Finally, based on the models discussed in the thesis, I provide a case study on pricing a Chinese option-like deposit product using Mathematica, which shows the feasibility and applicability of the option pricing method based on stochastic differential equations.
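    As a worked example of the pricing approach described above, the sketch below computes the closed-form Black-Scholes price of a European call on an asset following geometric Brownian motion, and cross-checks it with a risk-neutral Monte Carlo simulation of the terminal asset price. The parameter values are illustrative and not taken from the thesis.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call on an asset whose
    price follows geometric Brownian motion under the risk-neutral measure."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def monte_carlo_call(s0, k, r, sigma, t, n_paths=100_000):
    """Monte Carlo price using the exact GBM solution
    S_T = S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * Z)."""
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

# The two estimates should agree to within a few cents for these parameters.
print(black_scholes_call(100, 105, 0.03, 0.2, 1.0))
print(monte_carlo_call(100, 105, 0.03, 0.2, 1.0))
```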

    New taxonomic definition of the genus Neucentropus Martynov (Trichoptera: Polycentropodidae)

    The genera Neucentropus Martynov and Kyopsyche Tsuda constitute a monophyletic group, such that Kyopsyche is a new synonym of Neucentropus and the type species of Kyopsyche, Kyopsyche japonica Tsuda 1942, is transferred to Neucentropus (new combination).

    Template-Based Static Posterior Inference for Bayesian Probabilistic Programming

    In Bayesian probabilistic programming, a central problem is to estimate the normalised posterior distribution (NPD) of a probabilistic program with conditioning. Prominent approximate approaches to this problem include Markov chain Monte Carlo and variational inference, but neither can generate guaranteed outcomes within limited time. Moreover, most existing formal approaches that perform exact inference for NPD are restricted to programs with closed-form solutions or bounded loops/recursion. A recent work (Beutner et al., PLDI 2022) derived guaranteed bounds for NPD over programs with unbounded recursion; however, as this approach requires recursion unrolling, it suffers from the path explosion problem. Furthermore, previous approaches do not consider score-recursive probabilistic programs that allow score statements inside loops, which is non-trivial and requires careful treatment to ensure the integrability of the normalising constant in NPD. In this work, we propose a novel automated approach to derive bounds for NPD via polynomial templates. Our approach can handle probabilistic programs with unbounded while loops and continuous distributions with infinite supports. The novelties in our approach are threefold: first, we use polynomial templates to circumvent the path explosion problem caused by recursion unrolling; second, we derive a novel multiplicative variant of the Optional Stopping Theorem that addresses the integrability issue in score-recursive programs; third, to increase the accuracy of the bounds derived via polynomial templates, we propose a novel truncation technique that truncates a program to a bounded range of program values. Experiments over a wide range of benchmarks demonstrate that our approach is time-efficient and derives bounds for NPD that are comparable with (or tighter than) those of the recursion-unrolling approach (Beutner et al., PLDI 2022).
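    To make the setting concrete, the sketch below shows a tiny score-recursive probabilistic program of the kind described above: an unbounded while loop whose body both samples and scores (soft-conditions) the execution. The posterior expectation is estimated here with plain self-normalised importance sampling, which carries no guarantees; deriving guaranteed bounds on such quantities via polynomial templates is exactly what the paper contributes, and that machinery is not reproduced in this sketch. All names and constants are illustrative.

```python
import random

def weighted_run():
    """A score-recursive probabilistic program: an unbounded while loop
    that both samples and applies a score statement on every iteration."""
    x, weight, steps = random.uniform(0.0, 1.0), 1.0, 0
    while x > 0.1:
        x *= random.uniform(0.0, 1.0)   # probabilistic loop body
        weight *= 0.9                   # score statement inside the loop
        steps += 1
    return steps, weight

def posterior_mean_steps(n_samples=100_000):
    """Self-normalised importance-sampling estimate of the posterior mean
    of `steps` under the normalised posterior distribution (NPD)."""
    num = den = 0.0
    for _ in range(n_samples):
        steps, w = weighted_run()
        num += w * steps
        den += w
    return num / den

print(posterior_mean_steps())
```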

    BBReach: Tight and Scalable Black-Box Reachability Analysis of Deep Reinforcement Learning Systems

    Reachability analysis is a promising technique to automatically prove or disprove the reliability and safety of AI-empowered software systems that are developed using Deep Reinforcement Learning (DRL). Existing approaches, however, suffer from limited scalability and large overestimation, because they must over-approximate the complex and almost inexplicable system components, namely deep neural networks (DNNs). In this paper, we propose a novel, tight, and scalable reachability analysis approach for DRL systems. By training on abstract states, our approach treats the embedded DNNs as black boxes, avoiding the over-approximation of neural networks when computing reachable sets. To tackle the state explosion problem inherent to abstraction-based approaches, we devise a novel adjacent interval aggregation algorithm that balances the growth of abstract states against the overestimation caused by abstraction. We implement a tool, called BBReach, and assess it on an extensive benchmark of control systems to demonstrate its tightness, scalability, and efficiency.
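    The adjacent interval aggregation idea described above can be illustrated with a small sketch: abstract states are intervals over a state dimension, and touching intervals are merged only while the merged interval stays below a width cap, trading fewer abstract states against extra overestimation. The merging rule, the width cap, and the one-dimensional setting are simplifying assumptions for illustration, not the tool's actual algorithm.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # an abstract state: a closed interval [lo, hi]

def aggregate_adjacent(cells: List[Interval], max_width: float = 0.25) -> List[Interval]:
    """Merge adjacent or overlapping abstract intervals, but cap the width
    of any merged interval so that aggregation limits the growth of abstract
    states without blowing up the overestimation of the reachable set."""
    merged: List[Interval] = []
    for lo, hi in sorted(cells):
        if merged and lo <= merged[-1][1] and hi - merged[-1][0] <= max_width:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))  # extend last interval
        else:
            merged.append((lo, hi))  # start a new aggregated interval
    return merged

# The first two cells merge; adding the third would exceed the width cap,
# and the last cell is not adjacent, so both stay separate.
print(aggregate_adjacent([(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.6, 0.7)]))
# -> [(0.0, 0.2), (0.2, 0.3), (0.6, 0.7)]
```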